Database architecture (R)evolution: New hardware vs. new software
Abstract
The last few years have been exciting for data management system designers. The explosion in user and enterprise data, coupled with the availability of newer, cheaper, and more capable hardware, has led system designers and researchers to rethink and, in some cases, reinvent the traditional DBMS architecture. In the space of data warehousing and analytics alone, more than a dozen new database product offerings have recently appeared, and dozens of research system papers are routinely published each year. Among these efforts, one school of thought promotes research on exploiting and anticipating new hardware (many-core CPUs [4, 7, 8], GPUs [3], FPGAs [5, 11], flash SSDs [6], and other non-volatile storage technologies). Another school of thought focuses on software and algorithmic issues (column and hybrid stores [1, 10, 13], scale-out architectures using commodity hardware [2, 9, 10, 13], and optimizations in the network and OS software stack [9]). At the same time, there are approaches that combine hardware-specific optimizations with from-scratch database software design [12].

In this panel, we will ask our panelists, a mix of industry and academic experts, which of these trends will have lasting effects on database system design, and which directions hold the biggest potential for future research. We are particularly interested in the differences in views and approaches between academic and industrial research. Some of the questions that will be addressed during the panel are the following:

Is the use of non-conventional CPUs, from GPUs to FPGAs to custom chips, a research exercise, or a glimpse of things to come?

In a few years, a computing node will feature tens to hundreds of cores, and data will fit in fast non-volatile storage. Data volumes and workloads will grow by orders of magnitude. How well are existing systems prepared for this?

Scalability means different things to different people. What is it to you? Do academic researchers think "big" enough?

Column-store start-ups are flourishing, yet the "big" database vendors so far adhere to row-oriented data processing. Why?

Are flash SSDs simply faster disks, or a whole new and exciting research playground?

Scale-out (shared-nothing) database architectures are constantly gaining ground against scale-up (shared-memory/disk) ones. Will this trend hold, or will history repeat itself?
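One of the questions above contrasts column-oriented and row-oriented processing. A minimal sketch may help fix intuitions: the table, values, and names below are purely hypothetical and not from the panel, but they illustrate why analytic scans that touch one attribute favor a columnar layout.

```python
# Hypothetical toy "sales" table stored two ways, to contrast the layouts.
# Prices are in integer cents to keep the arithmetic exact.

rows = [  # row store: each record's fields are stored contiguously
    (1, "widget", 999),
    (2, "gadget", 2450),
    (3, "widget", 999),
]

columns = {  # column store: each attribute's values are stored contiguously
    "id":      [1, 2, 3],
    "product": ["widget", "gadget", "widget"],
    "price":   [999, 2450, 999],
}

# SUM(price): the row store must walk every record (and, on real hardware,
# pull every field of it through the cache) to read one attribute.
row_sum = sum(r[2] for r in rows)

# The column store scans only the one contiguous array the query needs,
# which is the core locality argument for columnar analytics engines.
col_sum = sum(columns["price"])

assert row_sum == col_sum  # both layouts compute 4448 cents
```

The same data answers the same query either way; the difference the panel question turns on is how much irrelevant data each layout drags through the memory hierarchy per scanned attribute.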
Publication date: 2010